Results 1 - 12 of 12
1.
Front Artif Intell ; 7: 1375482, 2024.
Article in English | MEDLINE | ID: mdl-38525302

ABSTRACT

Objective: Automated surgical step recognition (SSR) using AI has been a catalyst in the "digitization" of surgery. However, progress has been limited to laparoscopy, with relatively few SSR tools in endoscopic surgery. This study aimed to create an SSR model for transurethral resection of bladder tumors (TURBT), leveraging a novel application of transfer learning to reduce video dataset requirements. Materials and methods: Retrospective surgical videos of TURBT were manually annotated with the following steps of surgery: primary endoscopic evaluation, resection of bladder tumor, and surface coagulation. The manually annotated videos were then used to train a novel AI computer vision algorithm to perform automated video annotation of TURBT surgical video, using a transfer-learning technique to pre-train on laparoscopic procedures. Accuracy of AI SSR was determined by comparison to human annotations as the reference standard. Results: A total of 300 full-length TURBT videos (median 23.96 min; IQR 14.13-41.31 min) were manually annotated with sequential steps of surgery. One hundred and seventy-nine videos served as a training dataset for algorithm development, 44 for internal validation, and 77 as a separate test cohort for evaluating algorithm accuracy. Overall accuracy of AI video analysis was 89.6%. Model accuracy was highest for the primary endoscopic evaluation step (98.2%) and lowest for the surface coagulation step (82.7%). Conclusion: We developed a fully automated computer vision algorithm for high-accuracy annotation of TURBT surgical videos. This represents the first application of transfer learning from laparoscopy-based computer vision models to surgical endoscopy, demonstrating the promise of this approach in adapting to new procedure types.
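Several of the studies in this listing evaluate step recognition the same way: AI-generated step labels are compared frame by frame against human annotations as the reference standard, reporting overall and per-step accuracy. A minimal sketch of that comparison, with invented step labels (the studies' actual evaluation protocols may differ):

```python
def step_accuracy(predicted, reference):
    """Overall and per-step agreement between two step-label sequences."""
    assert len(predicted) == len(reference)
    overall = sum(p == r for p, r in zip(predicted, reference)) / len(reference)
    per_step = {}
    for step in set(reference):
        idx = [i for i, r in enumerate(reference) if r == step]
        per_step[step] = sum(predicted[i] == reference[i] for i in idx) / len(idx)
    return overall, per_step

# Toy example: three TURBT steps sampled once per second of video.
ref  = ["eval", "eval", "resect", "resect", "resect", "coag", "coag", "coag"]
pred = ["eval", "eval", "resect", "resect", "coag", "coag", "coag", "eval"]
overall, per_step = step_accuracy(pred, ref)  # overall = 0.75 on this toy sequence
```

Per-step accuracy highlights exactly the pattern the abstracts report: some steps (here, the initial evaluation) are recognized perfectly while visually ambiguous steps score lower.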

2.
Article in English | MEDLINE | ID: mdl-38546527

ABSTRACT

OBJECTIVE: The analysis of surgical videos using artificial intelligence holds great promise for the future of surgery by facilitating the development of surgical best practices, identifying key pitfalls, enhancing situational awareness, and disseminating that information via real-time, intraoperative decision-making. The objective of the present study was to examine the feasibility and accuracy of a novel computer vision algorithm for hysterectomy surgical step identification. METHODS: This was a retrospective study conducted on surgical videos of laparoscopic hysterectomies performed in 277 patients in five medical centers. We used a surgical intelligence platform (Theator Inc.) that employs advanced computer vision and AI technology to automatically capture video data during surgery, deidentify it, and upload procedures to a secure cloud infrastructure. Videos were manually annotated with sequential steps of surgery by a team of annotation specialists. Subsequently, a computer vision system was trained to perform automated step detection in hysterectomy. Accuracy was determined by comparing automated video annotations to manual human annotations. RESULTS: The mean duration of the videos was 103 ± 43 min. Accuracy between AI-based predictions and manual human annotations was 93.1% on average. Accuracy was highest for the dissection and mobilization step (96.9%) and lowest for the adhesiolysis step (70.3%). CONCLUSION: The results of the present study demonstrate that a novel AI-based model achieves high accuracy for automated step identification in hysterectomy. This lays the foundation for the next phase of AI, focused on real-time clinical decision support and prediction of outcome measures, to optimize surgeon workflow and elevate patient care.

3.
J Urol ; 211(4): 575-584, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38265365

ABSTRACT

PURPOSE: The widespread use of minimally invasive surgery generates vast amounts of potentially useful data in the form of surgical video. However, raw video footage is often unstructured and unlabeled, thereby limiting its use. We developed a novel computer-vision algorithm for automated identification and labeling of surgical steps during robotic-assisted radical prostatectomy (RARP). MATERIALS AND METHODS: Surgical videos from RARP were manually annotated by a team of image annotators under the supervision of 2 urologic oncologists. Full-length surgical videos were labeled to identify all steps of surgery. These manually annotated videos were then utilized to train a computer vision algorithm to perform automated video annotation of RARP surgical video. Accuracy of automated video annotation was determined by comparing to manual human annotations as the reference standard. RESULTS: A total of 474 full-length RARP videos (median 149 minutes; IQR 81 minutes) were manually annotated with surgical steps. Of these, 292 cases served as a training dataset for algorithm development, 69 cases were used for internal validation, and 113 were used as a separate testing cohort for evaluating algorithm accuracy. Concordance between artificial intelligence‒enabled automated video analysis and manual human video annotation was 92.8%. Algorithm accuracy was highest for the vesicourethral anastomosis step (97.3%) and lowest for the final inspection and extraction step (76.8%). CONCLUSIONS: We developed a fully automated artificial intelligence tool for annotation of RARP surgical video. Automated surgical video analysis has immediate practical applications in surgeon video review, surgical training and education, quality and safety benchmarking, medical billing and documentation, and operating room logistics.


Subject(s)
Prostatectomy , Robotic Surgical Procedures , Humans , Male , Artificial Intelligence , Educational Status , Prostate/surgery , Prostatectomy/methods , Robotic Surgical Procedures/methods , Video Recording
4.
Int J Mol Sci ; 25(2)2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38256266

ABSTRACT

Autism spectrum disorder (ASD) is a common condition with lifelong implications. The last decade has seen dramatic improvements in DNA sequencing and related bioinformatics and databases. We analyzed the raw DNA sequencing files on the Variantyx® bioinformatics platform for the last 50 ASD patients evaluated with trio whole-genome sequencing (trio-WGS). "Qualified" variants were defined as coding, rare, and evolutionarily conserved. Primary Diagnostic Variants (PDV), additionally, were present in genes directly linked to ASD and showed clinical correlation. A PDV was identified in 34/50 (68%) of cases, including 25 (50%) cases with heterozygous de novo and 10 (20%) with inherited variants. De novo variants in genes directly associated with ASD were far more likely to be Qualifying than non-Qualifying compared with a control group of genes (p = 0.0002), validating that most are indeed disease related. Sequence reanalysis increased the diagnostic yield from 28% to 68%, mostly through inclusion of de novo PDVs in genes not yet reported as ASD associated. Thirty-three subjects (66%) had treatment recommendation(s) based on DNA analyses. Our results demonstrate a high yield of trio-WGS for revealing molecular diagnoses in ASD, which is greatly enhanced by reanalyzing DNA sequencing files. In contrast to previous reports, de novo variants dominate the findings, mostly representing novel conditions. This has implications for the cause and rising prevalence of autism.
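The "Qualified" filter above (coding, rare, evolutionarily conserved) amounts to a simple predicate over variant annotations. A sketch of the idea; the field names, frequency threshold, and conservation score below are illustrative assumptions, not the study's actual pipeline:

```python
def is_qualified(variant, max_pop_freq=0.001, min_conservation=0.9):
    """Coding, rare, and conserved -- all thresholds are hypothetical."""
    return (variant["effect"] == "coding"
            and variant["pop_freq"] <= max_pop_freq
            and variant["conservation"] >= min_conservation)

# Hypothetical annotated variants (GENE2/GENE3 are placeholder names).
variants = [
    {"gene": "SCN2A", "effect": "coding",   "pop_freq": 0.0001, "conservation": 0.95},
    {"gene": "GENE2", "effect": "intronic", "pop_freq": 0.0001, "conservation": 0.95},  # not coding
    {"gene": "GENE3", "effect": "coding",   "pop_freq": 0.05,   "conservation": 0.95},  # too common
]
qualified = [v["gene"] for v in variants if is_qualified(v)]  # only "SCN2A" passes
```

Reanalysis, as described in the abstract, would rerun a filter like this over old sequencing files once annotations (gene-disease links, population frequencies) have been updated.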


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Humans , Autism Spectrum Disorder/genetics , Whole Genome Sequencing , Sequence Analysis, DNA , Computational Biology
5.
Surg Endosc ; 37(11): 8818-8828, 2023 11.
Article in English | MEDLINE | ID: mdl-37626236

ABSTRACT

INTRODUCTION: Artificial intelligence and computer vision are revolutionizing the way we perceive video analysis in minimally invasive surgery. This emerging technology has increasingly been leveraged successfully for video segmentation, documentation, education, and formative assessment. New, sophisticated platforms allow pre-determined segments chosen by surgeons to be automatically presented without the need to review entire videos. This study aimed to validate and demonstrate the accuracy of the first reported AI-based computer vision algorithm that automatically recognizes surgical steps in videos of totally extraperitoneal (TEP) inguinal hernia repair. METHODS: Videos of TEP procedures were manually labeled by a team of annotators trained to identify and label surgical workflow according to six major steps. For bilateral hernias, an additional change of focus step was also included. The videos were then used to train a computer vision AI algorithm. Performance accuracy was assessed in comparison to the manual annotations. RESULTS: A total of 619 full-length TEP videos were analyzed: 371 were used to train the model, 93 for internal validation, and the remaining 155 as a test set to evaluate algorithm accuracy. The overall accuracy for the complete procedure was 88.8%. Per-step accuracy reached the highest value for the hernia sac reduction step (94.3%) and the lowest for the preperitoneal dissection step (72.2%). CONCLUSIONS: These results indicate that the novel AI model was able to provide fully automated video analysis with a high accuracy level. High-accuracy models leveraging AI to enable automation of surgical video analysis allow us to identify and monitor surgical performance, providing mathematical metrics that can be stored, evaluated, and compared. As such, the proposed model is capable of enabling data-driven insights to improve surgical quality and demonstrate best practices in TEP procedures.


Subject(s)
Hernia, Inguinal , Laparoscopy , Humans , Hernia, Inguinal/surgery , Laparoscopy/methods , Artificial Intelligence , Workflow , Minimally Invasive Surgical Procedures , Herniorrhaphy/methods , Surgical Mesh
6.
Front Neurol ; 14: 1151835, 2023.
Article in English | MEDLINE | ID: mdl-37234784

ABSTRACT

Objective: To utilize whole exome or genome sequencing and the scientific literature for identifying candidate genes for cyclic vomiting syndrome (CVS), an idiopathic migraine variant with paroxysmal nausea and vomiting. Methods: A retrospective chart review of 80 unrelated participants, ascertained by a quaternary care CVS specialist, was conducted. Genes associated with paroxysmal symptoms were identified by querying the literature for genes associated with dominant cases of intermittent vomiting or both discomfort and disability; for these genes, the raw genetic sequence was reviewed. "Qualifying" variants were defined as coding, rare, and conserved. Additionally, "Key Qualifying" variants were Pathogenic/Likely Pathogenic, or "Clinical" based upon the presence of a corresponding diagnosis. Candidate association to CVS was based on a point system. Results: Thirty-five paroxysmal genes were identified per the literature review. Among these, 12 genes were scored as "Highly likely" (SCN4A, CACNA1A, CACNA1S, RYR2, TRAP1, MEFV) or "Likely" (SCN9A, TNFRSF1A, POLG, SCN10A, POGZ, TRPA1) CVS related. Nine additional genes (OTC, ATP1A3, ATP1A2, GFAP, SLC2A1, TUBB3, PPM1D, CHAMP1, HMBS) had sufficient evidence in the literature but not from our study participants. Candidate status for mitochondrial DNA was confirmed by the literature and our study data. Among the above-listed 22 CVS candidate genes, a Key Qualifying variant was identified in 31/80 (34%), and any Qualifying variant was present in 61/80 (76%) of participants. These findings were highly statistically significant (p < 0.0001 and p = 0.004, respectively) compared to an alternative hypothesis/control group of brain neurotransmitter receptor genes. An additional, less-intensive post-analysis review of all exome genes outside our paroxysmal gene set identified 13 further genes as "Possibly" CVS related. Conclusion: All 22 CVS candidate genes are associated with either cation transport or energy metabolism (14 directly, 8 indirectly). Our findings suggest a cellular model in which aberrant ion gradients lead to mitochondrial dysfunction, or vice versa, in a pathogenic vicious cycle of cellular hyperexcitability. Among the non-paroxysmal genes identified, 5 are known causes of peripheral neuropathy. Our model is consistent with multiple current hypotheses of CVS.

7.
Curr Top Med Chem ; 22(8): 686-698, 2022.
Article in English | MEDLINE | ID: mdl-35139798

ABSTRACT

An urgent need exists for a rapid, cost-effective, facile, and reliable nucleic acid assay for mass screening to control and prevent the spread of emerging pandemic diseases. This urgent need is not fully met by current diagnostic tools. In this review, we summarize the current state-of-the-art research in novel nucleic acid amplification and detection that could be applied to point-of-care (POC) diagnosis and mass screening of diseases. The critical technological breakthroughs will be discussed for their advantages and disadvantages. Finally, we will discuss the future challenges of developing nucleic acid-based POC diagnosis.


Subject(s)
Nucleic Acids , Nucleic Acid Amplification Techniques , Pandemics , Point-of-Care Systems
8.
Sci Rep ; 10(1): 22208, 2020 12 17.
Article in English | MEDLINE | ID: mdl-33335191

ABSTRACT

AI is becoming ubiquitous, revolutionizing many aspects of our lives. In surgery, it is still a promise. AI has the potential to improve surgeon performance and impact patient care, from post-operative debrief to real-time decision support. But how much data is needed by an AI-based system to learn surgical context with high fidelity? To answer this question, we leveraged a large-scale, diverse cholecystectomy video dataset. We assessed surgical workflow recognition and report a deep learning system that not only detects surgical phases, but does so with high accuracy and is able to generalize to new settings and unseen medical centers. Our findings provide a solid foundation for translating AI applications from research to practice, ushering in a new era of surgical intelligence.

9.
Genet Med ; 21(12): 2807-2814, 2019 12.
Article in English | MEDLINE | ID: mdl-31164752

ABSTRACT

PURPOSE: Phenotype information is crucial for the interpretation of genomic variants. So far it has only been accessible for bioinformatics workflows after encoding into clinical terms by expert dysmorphologists. METHODS: Here, we introduce an approach driven by artificial intelligence that uses portrait photographs for the interpretation of clinical exome data. We measured the value added by computer-assisted image analysis to the diagnostic yield on a cohort consisting of 679 individuals with 105 different monogenic disorders. For each case in the cohort we compiled frontal photos, clinical features, and the disease-causing variants, and simulated multiple exomes of different ethnic backgrounds. RESULTS: The additional use of similarity scores from computer-assisted analysis of frontal photos improved the top 1 accuracy rate by more than 20-89% and the top 10 accuracy rate by more than 5-99% for the disease-causing gene. CONCLUSION: Image analysis by deep-learning algorithms can be used to quantify the phenotypic similarity (PP4 criterion of the American College of Medical Genetics and Genomics guidelines) and to advance the performance of bioinformatics pipelines for exome analysis.
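The study above adds an image-derived similarity score to sequence-based gene prioritization and measures the gain in top-k accuracy. A toy sketch of how such a combined score can change the top-1 ranking; all genes, scores, and the simple additive combination are invented for illustration:

```python
def top_k_hit(ranked_genes, causal_gene, k):
    """True if the causal gene appears in the top k of a ranked list."""
    return causal_gene in ranked_genes[:k]

# Invented candidate scores; "PTPN11" plays the disease-causing gene.
molecular = {"PTPN11": 0.40, "RAF1": 0.55, "KRAS": 0.50}   # sequence-only prioritization
image_sim = {"PTPN11": 0.90, "RAF1": 0.10, "KRAS": 0.20}   # facial-similarity scores

rank_seq = sorted(molecular, key=molecular.get, reverse=True)
combined = {g: molecular[g] + image_sim[g] for g in molecular}
rank_comb = sorted(combined, key=combined.get, reverse=True)
# Sequence-only ranking misses at top 1; adding image similarity recovers it.
```

Top-1 and top-10 hit rates averaged over many cases give exactly the accuracy-rate improvements the abstract reports.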


Subject(s)
Computational Biology/methods , Image Processing, Computer-Assisted/methods , Sequence Analysis, DNA/methods , Algorithms , Databases, Genetic , Deep Learning , Exome/genetics , Female , Genomics , Humans , Male , Phenotype , Software
10.
Nat Med ; 25(1): 60-64, 2019 01.
Article in English | MEDLINE | ID: mdl-30617323

ABSTRACT

Syndromic genetic conditions, in aggregate, affect 8% of the population [1]. Many syndromes have recognizable facial features [2] that are highly informative to clinical geneticists [3-5]. Recent studies show that facial analysis technologies measured up to the capabilities of expert clinicians in syndrome identification [6-9]. However, these technologies identified only a few disease phenotypes, limiting their role in clinical settings, where hundreds of diagnoses must be considered. Here we present a facial image analysis framework, DeepGestalt, using computer vision and deep-learning algorithms, that quantifies similarities to hundreds of syndromes. DeepGestalt outperformed clinicians in three initial experiments, two with the goal of distinguishing subjects with a target syndrome from other syndromes, and one of separating different genetic subtypes in Noonan syndrome. On the final experiment reflecting a real clinical setting problem, DeepGestalt achieved 91% top-10 accuracy in identifying the correct syndrome on 502 different images. The model was trained on a dataset of over 17,000 images representing more than 200 syndromes, curated through a community-driven phenotyping platform. DeepGestalt potentially adds considerable value to phenotypic evaluations in clinical genetics, genetic testing, research and precision medicine.


Subject(s)
Deep Learning , Facies , Genetic Diseases, Inborn/diagnosis , Algorithms , Genotype , Humans , Image Processing, Computer-Assisted , Phenotype , Syndrome
11.
J Gastroenterol ; 51(3): 214-21, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26112122

ABSTRACT

BACKGROUND: Early detection of colorectal cancer (CRC) can reduce mortality and morbidity. Current screening methods include colonoscopy and stool tests, but a simple low-cost blood test would increase compliance. This preliminary study assessed the utility of analyzing the entire bio-molecular profile of peripheral blood mononuclear cells (PBMCs) and plasma using Fourier transform infrared (FTIR) spectroscopy for early detection of CRC. METHODS: Blood samples were prospectively collected from 62 candidates for CRC screening/diagnostic colonoscopy or surgery for colonic neoplasia. PBMCs and plasma were separated by Ficoll gradient, dried on zinc selenide slides, and placed under a FTIR microscope. FTIR spectra were analyzed for biomarkers and classified by principal component and discriminant analyses. Findings were compared among diagnostic groups. RESULTS: Significant changes in multiple bands that can serve as CRC biomarkers were observed in PBMCs (p = ~0.01) and plasma (p = ~0.0001) spectra. There were minor but statistically significant differences in both blood components between healthy individuals and patients with benign polyps. Following multivariate analysis, the healthy individuals could be well distinguished from patients with CRC, and the patients with benign polyps were mostly distributed as a distinct subgroup within the overlap region. Leave-one-out cross-validation for evaluating method performance yielded an area under the receiver operating characteristics curve of 0.77, with sensitivity 81.5% and specificity 71.4%. CONCLUSIONS: Joint analysis of the biochemical profile of two blood components rather than a single biomarker is a promising strategy for early detection of CRC. Additional studies are required to validate our preliminary clinical results.
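The pipeline described above (principal component analysis for dimensionality reduction, discriminant analysis for classification, leave-one-out cross-validation, area under the ROC curve) can be sketched with scikit-learn. The data below are synthetic; the study's spectra, preprocessing, and component counts will differ:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_per_group, n_bands = 30, 200                        # 60 "subjects", 200 spectral bands
X = rng.normal(size=(2 * n_per_group, n_bands))
y = np.array([0] * n_per_group + [1] * n_per_group)   # 0 = healthy, 1 = CRC
X[y == 1, :10] += 2.0                                 # shift a few bands in the CRC group

clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
proba = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
auc = roc_auc_score(y, proba)                         # near 1 on this easy toy data
```

Leave-one-out cross-validation, as used in the study, trains on all samples but one and scores the held-out sample, which suits the small cohort sizes typical of such pilot studies.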


Subject(s)
Biomarkers, Tumor/blood , Colorectal Neoplasms/diagnosis , Spectroscopy, Fourier Transform Infrared/methods , Adult , Aged , Aged, 80 and over , Blood Specimen Collection/methods , Colonoscopy , Early Detection of Cancer/methods , Female , Humans , Leukocytes, Mononuclear/chemistry , Male , Middle Aged , Prospective Studies , Young Adult
12.
BMC Cancer ; 15: 408, 2015 May 15.
Article in English | MEDLINE | ID: mdl-25975566

ABSTRACT

BACKGROUND: Most blood tests aiming for breast cancer screening rely on quantification of a single biomarker or a few biomarkers. The aim of this study was to evaluate the feasibility of detecting breast cancer by analyzing the total biochemical composition of plasma as well as peripheral blood mononuclear cells (PBMCs) using infrared spectroscopy. METHODS: Blood was collected from 29 patients with confirmed breast cancer and 30 controls with benign or no breast tumors, undergoing screening for breast cancer. PBMCs and plasma were isolated and dried on a zinc selenide slide and measured under a Fourier transform infrared (FTIR) microscope to obtain their infrared absorption spectra. Differences in the spectra of PBMCs and plasma between the groups were analyzed, as well as the specific influence of the relevant pathological characteristics of the cancer patients. RESULTS: Several bands in the FTIR spectra of both blood components significantly distinguished patients with and without cancer. Employing feature extraction with quadratic discriminant analysis, a sensitivity of ~90% and a specificity of ~80% for breast cancer detection were achieved. These results were confirmed by Monte Carlo cross-validation. Further analysis of the cancer group revealed an influence of several clinical parameters, such as the involvement of lymph nodes, on the infrared spectra, with each blood component affected by different parameters. CONCLUSION: The present preliminary study suggests that FTIR spectroscopy of PBMCs and plasma is a potentially feasible and efficient tool for the early detection of breast neoplasms. An important application of our study is the distinction between benign lesions (considered part of the non-cancer group) and malignant tumors, thus reducing false-positive results at screening. Furthermore, the correlation of specific spectral changes with clinical parameters of cancer patients indicates a possible contribution to diagnosis and prognosis.
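The evaluation described above (quadratic discriminant analysis on extracted spectral features, confirmed by Monte Carlo cross-validation, reported as sensitivity and specificity) can be sketched as repeated random stratified train/test splits. All data and split parameters below are synthetic stand-ins for the study's FTIR measurements:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import StratifiedShuffleSplit

rng = np.random.default_rng(1)
n_per_group, n_features = 30, 5                       # a handful of extracted spectral features
X = np.vstack([rng.normal(0.0, 1.0, (n_per_group, n_features)),
               rng.normal(1.5, 1.0, (n_per_group, n_features))])
y = np.array([0] * n_per_group + [1] * n_per_group)   # 0 = control, 1 = cancer

# Monte Carlo cross-validation: 50 random stratified 70/30 splits.
mc_cv = StratifiedShuffleSplit(n_splits=50, test_size=0.3, random_state=0)
sens, spec = [], []
for train, test in mc_cv.split(X, y):
    pred = QuadraticDiscriminantAnalysis().fit(X[train], y[train]).predict(X[test])
    sens.append(np.mean(pred[y[test] == 1] == 1))     # true-positive rate
    spec.append(np.mean(pred[y[test] == 0] == 0))     # true-negative rate
mean_sens, mean_spec = float(np.mean(sens)), float(np.mean(spec))
```

Averaging sensitivity and specificity over many random splits, rather than a single hold-out, stabilizes the estimates on a cohort of only 59 subjects.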


Subject(s)
Breast Neoplasms/diagnosis , Breast Neoplasms/metabolism , Early Detection of Cancer , Adult , Aged , Aged, 80 and over , Biomarkers, Tumor , Biopsy , Blood Chemical Analysis , Breast Neoplasms/blood , Case-Control Studies , Early Detection of Cancer/methods , Female , Humans , Leukocytes, Mononuclear/metabolism , Middle Aged , ROC Curve , Risk Factors , Spectroscopy, Fourier Transform Infrared , Young Adult